Bias, My Anatolian Compatriot
Author: Mithat Gönen, Ph.D.
If you’re a statistician, the word bias triggers a reflex. We are trained to hunt it down, diagnose it, quantify it, adjust for it, and—if possible—eliminate it. Selection bias, information bias, confounding bias, algorithmic bias. Bias is the villain of our discipline.
Which makes it quietly ironic that one of the earliest thinkers to grapple seriously with judgment and fairness was a man named Bias. An actual person: Bias of Priene, an ancient city in today’s Western Türkiye, about sixty kilometers south of Ephesus. I grew up spending my summers near Priene, yet I did not hear of Bias—my Anatolian compatriot—until well after my statistical training was complete. Bias was a lawyer, statesman, and sage, deeply embedded in the civic life of his city.
Somewhere along the way, his name took a spectacular reputational hit.
Today, “bias” means distortion, skew, systematic error. In ancient Greece, Bias of Priene was remembered for moderation, restraint, and resistance to rigid rule-following. The man whose name became shorthand for error spent his life warning against it. That reversal should give us pause.
What emerges from the fragments and anecdotes we have is a coherent view of fairness that feels surprisingly contemporary. Bias believed that applying the same rule to everyone does not automatically produce justice. Fairness, for him, was not sameness. It required judgment, context, and an awareness of imbalance. Laws, he thought, should protect those who lack power, not merely formalize the advantages of those who already have it.
This puts him immediately at odds with one of our favorite modern instincts: uniformity. One model, one loss function, one threshold, applied to everyone. Clean. Elegant. Reproducible. And often biased.
Bias distrusted rigid systems. He believed that strict adherence to rules—especially when divorced from context—could itself become a source of error. That suspicion should resonate with anyone who has watched a statistical analysis yield a confident conclusion that nonetheless felt wrong. A hard cutoff here, an exclusion criterion there, a regression coefficient interpreted causally when assignment was anything but random. Bias would not have admired this kind of intellectual shortcut.
He was also, by all accounts, pessimistic about human nature. One of his most cited sayings—often translated as “most men are bad”—reads less like misanthropy than realism. People respond to incentives. Power distorts judgment. Systems invite exploitation. For Bias, sound judgment could not rely on good intentions alone; it had to be constructed with failure, misuse, and strategic behavior in mind.
Fast-forward 2,600 years and replace the courtroom with a server room—or a clinical trial registry.
Modern statistics and machine learning are full of claims of neutrality. The model treats everyone the same. The algorithm is objective. The data speak for themselves. Bias of Priene would likely be unconvinced. He assumed that systems reflect how they are built: what is measured, what is ignored, who is included, and who is not.
Selection bias, for example, is not merely a technical nuisance. When patients are non-randomly assigned to treatments, when participation depends on access or motivation, or when datasets quietly exclude inconvenient cases, the resulting estimates may be precise, stable, and reproducible—and still wrong. From Bias’s perspective, saying “that’s just the data we have” would miss the point. Judgment begins well before estimation.
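The "precise, stable, and reproducible—and still wrong" pattern takes only a few lines to demonstrate. The sketch below is a toy simulation with an entirely invented enrollment mechanism: the true mean treatment benefit is 2.0, but patients with smaller expected benefit are more likely to enroll, so the sample estimate arrives with a tiny standard error and lands nowhere near the truth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: true mean treatment benefit of 2.0 units.
n = 100_000
benefit = rng.normal(loc=2.0, scale=3.0, size=n)

# Invented selection mechanism: the probability of enrolling
# *decreases* with the true benefit, so the enrolled sample is
# systematically unrepresentative.
p_enroll = 1 / (1 + np.exp(0.5 * benefit))
enrolled = rng.random(n) < p_enroll

sample = benefit[enrolled]
est = sample.mean()
se = sample.std(ddof=1) / np.sqrt(len(sample))

print("true mean benefit: 2.00")
print(f"sample estimate:   {est:.2f} (SE {se:.3f})")  # precise, stable, wrong
```

No amount of extra data fixes this: doubling `n` shrinks the standard error and leaves the bias untouched, which is exactly the sense in which judgment must begin before estimation.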
The same applies to measurement error. Variables that are easy to observe often stand in for variables that matter, imperfectly and asymmetrically. Noise is not always random. Some outcomes are measured carefully, others crudely; some populations are observed continuously, others sporadically. Treating all error as harmless variance rather than structured distortion would strike Bias as a conceptual mistake.
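One familiar consequence from measurement-error theory makes the point concrete: regressing on a noisy proxy pulls the estimated slope toward zero (attenuation), and when the noise differs across subgroups the distortion is structured rather than symmetric. A minimal sketch, with all numbers invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

x_true = rng.normal(0, 1, n)
y = 1.0 * x_true + rng.normal(0, 1, n)  # true slope = 1

# We observe only a noisy proxy, and the noise is larger for half the
# sample -- say, a subgroup that is measured sporadically rather than
# continuously (a made-up mechanism for illustration).
noise_sd = np.where(np.arange(n) % 2 == 0, 0.3, 1.0)
x_obs = x_true + rng.normal(0, noise_sd)

def slope(x, y):
    """Simple least-squares slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print(f"slope with true x:      {slope(x_true, y):.2f}")
print(f"slope with noisy proxy: {slope(x_obs, y):.2f}")  # attenuated toward 0
```

The attenuated slope is not "harmless variance": it is a systematic shrinkage whose size depends on who was measured how well.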
Algorithmic bias follows the same pattern. A uniform decision rule can feel fair because it is symmetrical, but symmetry is not accuracy. Errors propagate unevenly. Some cancel out; others compound. Some patients, borrowers, defendants, or applicants absorb mistakes far more painfully than others. Bias would insist that models be evaluated not only by internal metrics, but by how their errors behave once deployed.
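To make "symmetry is not accuracy" concrete, the hypothetical simulation below applies one shared cutoff to two groups whose scores are equally valid on average but differently noisy. The rule is perfectly uniform, yet the false-negative rate roughly doubles in the noisier group.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_group(n, noise):
    """Invented scoring model: 30% true positives in every group;
    scores center at 0.7 for positives, 0.0 for negatives."""
    truth = rng.random(n) < 0.3
    score = truth * 0.7 + rng.normal(0, noise, n)
    return truth, score

# Group B's scores are simply noisier -- e.g., measured more crudely.
truth_a, score_a = simulate_group(50_000, noise=0.20)
truth_b, score_b = simulate_group(50_000, noise=0.40)

cutoff = 0.5  # one uniform threshold, applied to everyone

def false_negative_rate(truth, score):
    return ((score < cutoff) & truth).sum() / truth.sum()

fnr_a = false_negative_rate(truth_a, score_a)
fnr_b = false_negative_rate(truth_b, score_b)
print(f"false-negative rate, group A: {fnr_a:.1%}")
print(f"false-negative rate, group B: {fnr_b:.1%}")
```

An internal metric computed over the pooled sample would average these two error rates together and report something respectable; only a deployment-style, per-group look reveals who is absorbing the mistakes.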
Even our comfort with sharp thresholds would have troubled him. Bias advised restraint—“be slow to speak”—which sounds very much like a warning against overconfidence. Treating probabilistic outputs as certainties, collapsing uncertainty into binary decisions, or outsourcing judgment entirely to automated procedures reflects more than methodological convenience. It reflects a broader, often unwarranted and sometimes self-serving optimism—particularly common in entrepreneurial and computational circles—that technical ingenuity can substitute for judgment in solving deeply human problems.
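The price of collapsing uncertainty can be put in numbers. In the invented example below, a well-calibrated model's probabilities are binarized at a reflexive 0.5 cutoff, even though a missed positive costs ten times a false alarm; keeping the probabilities in view and using the standard expected-cost-minimizing threshold, p ≥ C_FP / (C_FP + C_FN), does far better.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical, well-calibrated model output: each case's true
# probability of a positive outcome.
p = rng.beta(2, 5, n)
outcome = rng.random(n) < p

# Invented asymmetric costs: a missed positive costs 10x a false alarm.
COST_FN, COST_FP = 10.0, 1.0

def mean_cost(decide):
    fn = (~decide & outcome).mean() * COST_FN   # missed positives
    fp = (decide & ~outcome).mean() * COST_FP   # false alarms
    return fn + fp

naive = p >= 0.5                              # certainty-style cutoff
aware = p >= COST_FP / (COST_FP + COST_FN)    # cost-aware threshold

print(f"mean cost, naive 0.5 cutoff:  {mean_cost(naive):.3f}")
print(f"mean cost, cost-aware cutoff: {mean_cost(aware):.3f}")
```

The probabilities were informative all along; it was the premature collapse to yes/no that threw the information away.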
What makes Bias especially relevant for us is that he is not a distant philosophical import. He is local. Anatolian. Part of a tradition of pragmatic reasoning that understood judgment as something exercised, negotiated, and revised—not computed once and for all. His concerns mirror the questions we confront daily as statisticians: Which assumptions are doing the real work? Where does convenience replace understanding? When does methodological cleanliness mask conceptual fragility? Where should judgment intervene?
Perhaps the most uncomfortable lesson Bias offers us as statisticians is this: technical rigor is not enough. Large samples, elegant models, and vast computational power do not guarantee sound judgment. Bias does not disappear simply because it is formalized.
Bias the sage would probably find it amusing that his name now represents the very thing he spent his life cautioning against. Then again, he might simply advise us—as colleagues—to pause, look more carefully at our assumptions, and think a little harder before declaring our conclusions “objective.”
That advice still feels unreasonably modern.